Section: New Results

High performance simulation for plasma physics

Participants: Rémi Abgrall, Robin Huart, Xavier Lacoste, François Pellegrini, Pierre Ramet [Corresponding member].

In the RealfluiDS code, the Rusanov scheme for ideal MHD has repeatedly demonstrated its robustness and its ability to capture discontinuities in 2D problems [57]. Other spatial schemes remain of interest for tokamak applications, however, since strong shocks are not expected in these cases. Depending on the type of problem, coupled schemes can therefore be used. We have already developed the four well-known base RD schemes: Narrow, LDA, Rusanov and SU (an RD version of the SUPG scheme). Coupling them should not be a major difficulty, since a working shock sensor is already implemented for the stabilized methods. Very high order of accuracy (at least third order) should be reachable in all cases; the main parts of this work have already been completed for several types of elements. The non-dimensionalized equations of resistive MHD (with viscosity and heat transfer) have been added to the code with a continuous Galerkin discretization, and second-order implicit and explicit time integration methods were developed in all cases. Once we ensure good iterative convergence while taking the hyperbolic divergence cleaning technique into account in an unsteady context, we will be able to simulate plasma instabilities; this is the key issue for now. These results will be presented in the PhD defense of R. Huart, planned for January 2012.
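For reference, the hyperbolic divergence cleaning mentioned above is commonly written in the generalized Lagrange multiplier (GLM) form of Dedner et al.; the exact variant used in RealfluiDS is not detailed here, so the cleaning speed c_h and damping parameter c_p below should be read as illustrative:

    \partial_t \mathbf{B} + \nabla \cdot ( \mathbf{v} \otimes \mathbf{B} - \mathbf{B} \otimes \mathbf{v} ) + \nabla \psi = 0,
    \partial_t \psi + c_h^2 \, \nabla \cdot \mathbf{B} = - \frac{c_h^2}{c_p^2} \, \psi.

The auxiliary scalar \psi transports divergence errors away at the finite speed c_h and damps them at a rate controlled by c_p, which is what has to be handled consistently with the unsteady, second-order time integration mentioned above.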

The JOREK code is now able to use several hundred processors routinely. Simulations of ELMs are produced taking into account the X-point geometry, with both closed and open field lines. However, a higher toroidal resolution is required to resolve the fine-scale filaments that form during the ELM instability. The complexity of the tokamak geometry and the fine mesh that is required lead to prohibitive memory requirements. In the current release, the memory scaling is not satisfactory: as the number of processes increases for a given problem size, the memory footprint of each process does not decrease as much as one would expect.
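This behaviour can be illustrated with a simple (and deliberately simplified) model: if the per-process memory splits into a part R that is replicated on every process and a part D that is distributed among the P processes, the footprint behaves roughly as

    M(P) \approx R + \frac{D}{P},

so that beyond a certain process count the replicated part dominates and adding processes no longer reduces the footprint. The actual breakdown between replicated and distributed data in JOREK is not quantified here; the model only suggests why the observed scaling flattens out.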

In the context of the new ANR proposal (ANEMOS project), we are working to reduce the main memory consumers in the JOREK code. Compression techniques can be envisaged to reduce the footprint of the matrix without incurring large computational costs. Moreover, the storage of the factorized preconditioning matrix inside the direct solver also takes a large amount of memory. We have defined and developed a generic programming interface for sparse linear solvers (http://murge.gforge.inria.fr), for which we also provide test programs and documentation. Our goal is to standardize the application programming interface of sparse linear solvers and to provide simple ways of performing tedious tasks such as parallel matrix assembly. This interface has been validated in RealfluiDS and JOREK for HIPS and PaStiX. Using this common interface, we are looking for a fair distribution of the data over the parallel processes in order to reduce memory consumption. The effective parallelization of this assembly step has been one of the main bottlenecks so far, as far as memory usage is concerned. The GMRES driver is also a large memory consumer, and we plan to consider an up-to-date parallel implementation of this step.
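To make the intent of this common interface concrete, here is a minimal sketch in C of the parallel assembly pattern it is meant to hide. The functions solver_assembly_begin/add_value/end are hypothetical placeholders, not the actual MURGE entry points (those and their signatures are documented at http://murge.gforge.inria.fr), and the stubs only log the calls so that the example is self-contained. In the real setting, each MPI process would execute this sequence for the matrix entries of the elements it owns, expressed in global coordinates, and the solver layer would route and sum them according to its own data distribution.

#include <stdio.h>

/* Hypothetical generic sparse-solver assembly interface: the names below are
 * illustrative placeholders, not the actual MURGE calls. The stubs only log
 * what a real solver layer would do with each call. */

static int solver_assembly_begin(long nb_local_entries)
{
    /* Declare how many entries this process intends to contribute. */
    printf("assembly begin: %ld local entries expected\n", nb_local_entries);
    return 0;
}

static int solver_assembly_add_value(long row, long col, double value)
{
    /* A real solver layer would buffer the entry and later route it to the
     * process owning (row, col); entries with identical coordinates are summed. */
    printf("  A(%ld,%ld) += %g\n", row, col, value);
    return 0;
}

static int solver_assembly_end(void)
{
    /* A real solver layer would perform the parallel exchange here and store
     * the matrix according to its own internal distribution. */
    printf("assembly end: matrix handed over to the solver\n");
    return 0;
}

int main(void)
{
    /* Entries of a small example matrix, given in global coordinates, as an
     * application code would produce them element by element during
     * finite element assembly. */
    long   rows[] = { 0, 0, 1, 1, 2 };
    long   cols[] = { 0, 1, 0, 1, 2 };
    double vals[] = { 4.0, -1.0, -1.0, 4.0, 4.0 };
    long   n = 5;

    solver_assembly_begin(n);
    for (long i = 0; i < n; i++)
        solver_assembly_add_value(rows[i], cols[i], vals[i]);
    solver_assembly_end();
    return 0;
}

The point of such an interface is that the application only manipulates global coordinates; the fair redistribution of data mentioned above then becomes a concern of the solver layer alone.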